“Sorry, I can’t do that”: Developing Mechanisms to Appropriately Reject Directives in Human-Robot Interactions

Authors

  • Gordon Briggs
  • Matthias Scheutz
Abstract

Future robots will need mechanisms to determine when and how it is best to reject directives that they receive from human interlocutors. In this paper, we briefly present initial work that has been done in the DIARC/ADE cognitive robotic architecture to enable a directive rejection and explanation mechanism, showing its operation in a simple HRI scenario.

Introduction and Motivation

An ongoing goal at the intersection of artificial intelligence (AI), robotics, and human-robot interaction (HRI) is to create autonomous agents that can assist and interact with human teammates in natural and human-like ways. This is a multifaceted challenge, involving both the development of an ever-expanding set of capabilities (physical and algorithmic) such that robotic agents can autonomously engage in a variety of useful tasks, and the development of interaction mechanisms (e.g., natural language capabilities) such that humans can direct these robots to perform those tasks efficiently (Scheutz et al. 2007). That is to say, much of the research at the intersection of AI, robotics, and HRI is concerned with enabling robots to receive a command to “Do X,” and to both understand and successfully carry out such commands over an increasingly large set of tasks.

However, there also exists a dual challenge, which has heretofore received less attention in AI/HRI research. As the capabilities of robotic agents grow, so too will human expectations of individual robotic agents, and so too will the set of actions that robotic agents are capable of performing but that situational context would deem inappropriate. Therefore, future robots will need mechanisms to determine when and how it is best to reject directives that they receive from interlocutors. Indeed, humans reject directives for a wide range of reasons, from inability all the way to moral qualms. Given the current limitations of autonomous systems, most directive rejection mechanisms have only needed to make use of the former class of excuse (lack of knowledge or lack of ability). However, as the abilities of autonomous agents continue to be developed, a growing community interested in machine ethics, the field concerned with enabling autonomous agents to reason ethically about their own actions, has produced initial work proposing architectural and reasoning mechanisms to enable such determinations (Arkin 2009; Bringsjord, Arkoudas, and Bello 2006). What is still missing, however, is a general, integrated set of architectural mechanisms in cognitive robotic architectures that can determine whether a directive should be accepted or rejected over the space of all possible excuse categories, and that can generate the appropriate rejection explanation.

In this paper, we briefly present initial work that has been done in the DIARC/ADE cognitive robotic architecture (Schermerhorn et al. 2006; Kramer and Scheutz 2006) to enable such a rejection and explanation mechanism. First, we discuss the theoretical considerations behind this challenge, specifically the conditions that must be met for a directive to be appropriately accepted. Next, we briefly present some of the explicit reasoning mechanisms developed to facilitate these successful interactions.
Finally, we present an example interaction that illustrates these mechanisms at work in a simple HRI scenario.

Reasoning about Felicity Conditions

Understanding directives (or any other form of speech act) can be thought of as a subset of the behaviors necessary for achieving mutual understanding (common ground) between interactants. Theoretical work in conversation and dialogue has conceived of the process of establishing common ground as a multi-stage one (Clark 1996). The first stage is the attentional stage, in which both interactants are successfully attending to one another in a conversational context. The second stage is a perceptual one, in which the addressee successfully perceives a communicative act directed at him or her by the speaker. The third stage is one of semantic understanding, where the perceived input from the second stage is associated with some literal meaning. Finally, the fourth stage is one of intentional understanding, which Clark (1996) terms uptake. This stage goes beyond the literal semantics of an observed utterance to infer what the speaker’s intentions are in the joint context.

While Clark’s multi-stage model of establishing common ground is valuable in conceptualizing the challenges involved, it can be further refined. Schlöder (2014) proposes that uptake be divided into weak and strong forms. Weak uptake can be associated with the intentional understanding process found in (Clark 1996), whereas strong uptake denotes the stage where the addressee may either accept or reject the proposal implicit in the speaker’s action. A proposal is not strongly “taken up” unless it has been accepted as well as understood (Schlöder 2014). This distinction is important: the addressee can certainly understand the intention behind an indirect request such as “Could you deliver the package?”, but this does not necessarily mean that the addressee will actually agree to the request and carry it out. In order for the proposal to be accepted, the necessary felicity conditions must hold. Below we articulate a set of key categories of felicity conditions that must hold in order for a proposal to be explicitly accepted by a robotic agent:

1. Knowledge: Do I know how to do X?
2. Capacity: Am I physically able to do X now? Am I normally physically able to do X?
3. Goal priority and timing: Am I able to do X right now?
4. Social role and obligation: Am I obligated based on my social role to do X?
5. Normative permissibility: Does it violate any normative principle to do X?

To be sure, being able to reason about and address these felicity conditions to the same degree as a human agent will remain an open research challenge for the foreseeable future. For instance, the ability of a robotic agent to learn new capabilities and tasks greatly complicates the issue of rejecting a directive based on ignorance (category 1). In this case, when the robot does not know how to do X, it ought to additionally reason about whether it is able to learn X, from whom it is able to learn X, and how long it would take to learn X (relative to the task completion time expectations of the interlocutor), all of which are challenging questions in themselves. Regardless, it is still important for future robotic agents to be able to reason at least in a rudimentary way about all of these felicity conditions; a minimal sketch of such a screening procedure is given below.
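To make these categories concrete, the following minimal Python sketch screens a directive against the five conditions in order and returns the first applicable rejection. This is our illustration, not the DIARC/ADE implementation; every class, field, and canned utterance below is a hypothetical stand-in.

    from dataclasses import dataclass, field

    @dataclass
    class Robot:
        known_actions: set = field(default_factory=set)     # actions with known scripts/plans (condition 1)
        feasible_actions: set = field(default_factory=set)  # actions physically possible for this robot (condition 2)
        busy: bool = False                                  # a higher-priority goal currently occupies the robot (condition 3)
        obligations: set = field(default_factory=set)       # (speaker, action) pairs the robot's role obliges (condition 4)
        forbidden_actions: set = field(default_factory=set) # actions that violate a normative principle (condition 5)

    def screen_directive(robot, speaker, action):
        """Return None to accept the directive, or a rejection utterance."""
        if action not in robot.known_actions:           # 1. knowledge
            return "Sorry, I do not know how to do that."
        if action not in robot.feasible_actions:        # 2. capacity
            return "Sorry, I am unable to do that."
        if robot.busy:                                  # 3. goal priority and timing
            return "Sorry, I cannot do that right now."
        if (speaker, action) not in robot.obligations:  # 4. social role and obligation
            return "Sorry, I am not obligated to do that for you."
        if action in robot.forbidden_actions:           # 5. normative permissibility
            return "Sorry, that would violate a principle I must follow."
        return None  # all felicity conditions hold: accept

    robot = Robot(known_actions={"deliver_package"},
                  feasible_actions={"deliver_package"},
                  obligations={("commander", "deliver_package")})
    print(screen_directive(robot, "commander", "deliver_package"))  # None -> accept
    print(screen_directive(robot, "stranger", "deliver_package"))   # role-based rejection

In a full architecture, each of these checks would of course be a nontrivial reasoning process against belief and goal-management components rather than a set-membership test.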
As mentioned previously, a variety of work has focused on the challenge of generating excuses for the first few felicity conditions. For example, there exists some previous work on generating excuses for sets of directives that are impossible to satisfy (Raman et al. 2013). Additionally, machine ethicists are interested in developing mechanisms to reason about category 5. However, there does not yet exist an architecture able to address all of these categories. Below we introduce the mechanisms in the DIARC/ADE architecture that are designed to begin to meet this challenge.

Architectural Mechanisms

When the robot is instructed by a human interaction partner (whom we will denote β) to achieve some goal φ, the robot will infer, based on the NL understanding mechanisms found in (Briggs and Scheutz 2013), that want(β, do(self, φ)). The robot then engages in the reasoning process illustrated in Figure 1 to determine when and how to reject the potential directive.

[Figure 1: Flowchart of the acceptance/rejection process, with steps including: process utterance implications; check implied predicates (p = want(A, X)); query belief for any resulting goals (G = goal(self, X, P)); query the goal manager for any scripts/plans to achieve X; and generate the appropriate response: a rejection based on belief reasoning, an ignorance rejection (“Sorry, I do not know how to do that.”), a rejection based on failure information, or a general rejection (“Sorry, I cannot do that.”).]
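Read this way, Figure 1 describes a small decision procedure over the architecture’s belief and goal-management components. The following self-contained Python sketch gives one plausible reading of that flow; the ordering of the belief and goal-manager queries, the component interfaces, and the canned utterances are our assumptions, not the DIARC/ADE API.

    class Belief:
        """Stand-in for the belief component."""
        def __init__(self, goal_rules):
            # Maps a wanted action X to the priority P of goal(self, X, P);
            # a missing entry means belief reasoning blocks goal adoption
            # (e.g., no obligation holds, or a norm would be violated).
            self.goal_rules = goal_rules

        def resulting_goals(self, speaker, x):
            # Goals resulting from the implied predicate want(speaker, do(self, x)).
            p = self.goal_rules.get(x)
            return [("self", x, p)] if p is not None else []

    class GoalManager:
        """Stand-in for the goal manager."""
        def __init__(self, scripts):
            self.scripts = scripts  # action -> callable returning True on success

        def plans_for(self, x):
            return x in self.scripts

        def achieve(self, x):
            return self.scripts[x]()

    def handle_directive(speaker, x, belief, goal_manager):
        """Accept and act on the directive, or return a rejection utterance."""
        # NL understanding has already yielded want(speaker, do(self, x)).
        if not belief.resulting_goals(speaker, x):
            return "Sorry, I should not do that."          # rejection based on belief reasoning
        if not goal_manager.plans_for(x):
            return "Sorry, I do not know how to do that."  # ignorance rejection
        if not goal_manager.achieve(x):
            return "Sorry, I cannot do that."              # failure-based or general rejection
        return "Okay."

    belief = Belief({"deliver_package": 1.0})
    manager = GoalManager({"deliver_package": lambda: True})
    print(handle_directive("commander", "deliver_package", belief, manager))  # Okay.
    print(handle_directive("commander", "fly_away", belief, manager))         # belief-based rejection

The paper’s goal of generating appropriate rejection explanations amounts to making each failing branch report why it failed (which condition blocked the directive), rather than returning the canned strings used here.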

Publication date: 2015